Results 1 - 20 of 282
1.
PLoS One ; 19(4): e0301364, 2024.
Article in English | MEDLINE | ID: mdl-38630681

ABSTRACT

Although a rich academic literature examines the use of fake news by foreign actors for political manipulation, there is limited research on potential foreign intervention in capital markets. To address this gap, we construct a comprehensive database of (negative) fake news regarding U.S. firms by scraping prominent fact-checking sites. We identify the accounts that spread the news on Twitter (now X) and use machine-learning techniques to infer the geographic locations of these fake news spreaders. Our analysis reveals that corporate fake news is more likely than corporate non-fake news to be spread by foreign accounts. At the country level, corporate fake news is more likely to originate from African and Middle Eastern countries and tends to increase during periods of high geopolitical tension. At the firm level, firms operating in uncertain information environments and strategic industries are more likely to be targeted by foreign accounts. Overall, our findings provide initial evidence of foreign-originating misinformation in capital markets and thus have important policy implications.
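The geolocation step mentioned in the abstract can be illustrated with a small, hedged sketch: classifying a spreader's country from free-text profile fields with a supervised text classifier. The study's actual features, training data, and model are not described here; every profile string, label, and parameter below is a hypothetical placeholder.

```python
# Minimal sketch (not the authors' pipeline): infer an account's country from
# free-text profile fields with a bag-of-words classifier. The training rows
# are toy examples; a real pipeline would use labelled accounts and richer
# signals (time zone, language, posting times, network ties).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_profiles = [
    "NYC | finance | opinions my own",
    "Lagos, Nigeria - tech and markets",
    "Dubai based trader and investor",
    "Chicago sports and stocks",
]
train_countries = ["US", "NG", "AE", "US"]

geo_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
geo_model.fit(train_profiles, train_countries)

# Classify the accounts that spread a flagged (fake) corporate news item.
spreader_profiles = ["Abuja | crypto signals daily", "Boston value investor"]
print(geo_model.predict(spreader_profiles))
```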


Subject(s)
Disinformation , Geography , Databases, Factual , Industry
4.
PLoS One ; 19(4): e0301818, 2024.
Article in English | MEDLINE | ID: mdl-38593132

ABSTRACT

The widespread dissemination of misinformation on social media is a serious threat to global health. To a large extent, it is still unclear who actually shares health-related misinformation deliberately and accidentally. We conducted a large-scale online survey among 5,307 Facebook users in six sub-Saharan African countries, in which we collected information on sharing of fake news and truth discernment. We estimate the magnitude and determinants of deliberate and accidental sharing of misinformation related to three vaccines (HPV, polio, and COVID-19). In an OLS framework we relate the actual sharing of fake news to several socioeconomic characteristics (age, gender, employment status, education), social media consumption, personality factors and vaccine-related characteristics while controlling for country and vaccine-specific effects. We first show that actual sharing rates of fake news articles are substantially higher than those reported from developed countries and that most of the sharing occurs accidentally. Second, we reveal that the determinants of deliberate vs. accidental sharing differ. While deliberate sharing is related to being older and risk-loving, accidental sharing is associated with being older, male, and high levels of trust in institutions. Lastly, we demonstrate that the determinants of sharing differ by the adopted measure (intentions vs. actual sharing) which underscores the limitations of commonly used intention-based measures to derive insights about actual fake news sharing behaviour.
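The OLS framework described in the abstract can be sketched as follows; the outcome, regressors, and toy data frame are assumptions standing in for the survey variables, with country and vaccine fixed effects included as in the study, and a parallel model could use accidental sharing as the outcome.

```python
# Minimal sketch (assumed specification, not the authors' code): OLS of
# deliberate fake-news sharing on respondent characteristics with country and
# vaccine fixed effects. All columns and values are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "shared_deliberately": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
    "age":         [34, 22, 51, 40, 29, 63, 45, 31, 27, 58, 36, 49],
    "male":        [1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
    "risk_loving": [4, 2, 5, 1, 3, 2, 4, 1, 2, 5, 3, 4],
    "trust_inst":  [3, 4, 2, 5, 3, 4, 1, 2, 5, 3, 2, 4],
    "country":     ["KE", "NG", "KE", "GH", "NG", "GH",
                    "KE", "NG", "GH", "KE", "NG", "GH"],
    "vaccine":     ["HPV", "polio", "COVID-19", "HPV", "COVID-19", "polio",
                    "HPV", "COVID-19", "polio", "HPV", "COVID-19", "polio"],
})

model = smf.ols(
    "shared_deliberately ~ age + male + risk_loving + trust_inst"
    " + C(country) + C(vaccine)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.params)
```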


Subject(s)
Infertility , Social Media , Vaccines , Humans , Male , Disinformation , Africa South of the Sahara/epidemiology
5.
RECIIS (Online) ; 18(1)jan.-mar. 2024.
Article in Portuguese | LILACS, Coleciona SUS | ID: biblio-1553441

ABSTRACT



Considering the growing importance of YouTube as a source for health information search, the aim of this study was to analyze the factors associated with a higher number of views in videos about covid-19 vaccines. For this purpose, Natural Language Processing techniques and statistical modeling were employed based on 13,619 videos, encompassing three types of variables: general metrics, textual content of titles, and information about the participants in the videos. Among the results, videos of medium or long duration, posted during late hours and on weekends, with tags, descriptions, and short titles, along with controversial elements and the presence of male and white figures in thumbnails stand out. These findings contribute to a better understanding of the potential factors to be considered in the production of health communication content about vaccines on YouTube.
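One way the statistical-modelling step could look, restricted to the "general metrics" variables the abstract lists, is a count model of views; the specification, variable names, and toy data below are assumptions rather than the authors' model.

```python
# Minimal sketch (assumed specification): Poisson regression of view counts on
# video features highlighted in the abstract (duration, posting time, weekend,
# title length, controversy cue). The data frame is a toy stand-in for the
# 13,619-video corpus.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

videos = pd.DataFrame({
    "views":        [1200, 40, 980, 15000, 300, 70, 5200, 900, 150, 2600],
    "duration_min": [12, 2, 25, 18, 1, 3, 40, 9, 5, 22],
    "late_night":   [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
    "weekend":      [1, 0, 0, 1, 0, 1, 1, 0, 0, 1],
    "title_chars":  [38, 95, 42, 30, 120, 88, 35, 60, 110, 45],
    "controversy":  [1, 0, 0, 1, 0, 0, 1, 1, 0, 1],
})

fit = smf.glm(
    "views ~ duration_min + late_night + weekend + title_chars + controversy",
    data=videos,
    family=sm.families.Poisson(),
).fit()
print(fit.params)
```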




Subject(s)
Communications Media , Information Dissemination , Health Communication , Social Media , COVID-19 Vaccines , COVID-19 , Health Education , Access to Information , Disinformation , Mass Media
6.
PLoS One ; 19(3): e0300497, 2024.
Article in English | MEDLINE | ID: mdl-38512834

ABSTRACT

Disinformation, false information intended to cause harm or created for profit, is pervasive. While disinformation exists in several domains, one area with great potential for personal harm is healthcare. The amount of disinformation about health issues on social media has grown dramatically over the past several years, particularly in response to the COVID-19 pandemic. The study described in this paper sought to determine the characteristics of multimedia social network posts that lead viewers to believe, and potentially act on, healthcare disinformation. The study was conducted in a neuroscience laboratory in early 2022. Twenty-six participants each viewed a series of 20 honest or dishonest social media posts dealing with various aspects of healthcare. They were asked to determine whether each post was true or false and then to provide the reasoning behind their choices. Participant gaze was captured with eye-tracking technology and examined through "area of interest" analysis, an approach that can reveal which elements of disinformation help convince the viewer that a given post is true. Participants correctly identified the true nature of the posts 69% of the time. Overall, the source of the post, whether its claims seemed reasonable, and the look and feel of the post were the reasons participants cited most often for judging a post true or false. Based on the eye-tracking data collected, the factors most associated with successfully detecting disinformation were the total number of fixations on key words and the total number of revisits to source information. The findings suggest the outlines of generalizations about why people believe online disinformation and provide a basis for the development of mid-range theory.
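The "area of interest" analysis can be illustrated with a small hedged sketch: counting fixations that fall inside each AOI and revisits to the source region. The coordinates, AOI boxes, and fixation records below are hypothetical and are not the study's data.

```python
# Minimal sketch (hypothetical data, not the study's pipeline): an
# "area of interest" (AOI) analysis over eye-tracking fixations.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float       # screen x coordinate in pixels
    y: float       # screen y coordinate in pixels
    dur_ms: int    # fixation duration in milliseconds

# AOIs as (x_min, y_min, x_max, y_max) rectangles over the rendered post.
aois = {
    "source":    (40, 20, 400, 60),    # account name / outlet line
    "key_words": (40, 120, 600, 180),  # claim text of the post
}

fixations = [Fixation(100, 40, 220), Fixation(300, 150, 180),
             Fixation(90, 45, 250), Fixation(500, 160, 200)]

def in_aoi(fix, box):
    x0, y0, x1, y1 = box
    return x0 <= fix.x <= x1 and y0 <= fix.y <= y1

# Total fixations per AOI.
counts = {name: sum(in_aoi(f, box) for f in fixations)
          for name, box in aois.items()}

def revisits(name):
    """Number of returns to an AOI after the first visit."""
    inside = [in_aoi(f, aois[name]) for f in fixations]
    entries = sum(1 for prev, cur in zip([False] + inside, inside)
                  if cur and not prev)
    return max(entries - 1, 0)

print(counts, revisits("source"))
```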


Subject(s)
COVID-19 , Social Media , Humans , Disinformation , Pandemics , Health Facilities , Laboratories , COVID-19/epidemiology
7.
BMJ ; 384: q579, 2024 03 20.
Article in English | MEDLINE | ID: mdl-38508671
8.
BMJ ; 384: e078538, 2024 03 20.
Article in English | MEDLINE | ID: mdl-38508682

ABSTRACT

OBJECTIVES: To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities. DESIGN: Repeated cross sectional analysis. SETTING: Publicly accessible LLMs. METHODS: In a repeated cross sectional analysis, four LLMs (via chatbots/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cancer cure. Jailbreaking techniques (ie, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. 12 weeks after initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards. MAIN OUTCOME MEASURES: The main outcome measures were whether safeguards prevented the generation of health disinformation, and the transparency of risk mitigation processes against health disinformation. RESULTS: Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts, although this was no longer the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150), and, as prompted, the LLM-generated blogs incorporated attention-grabbing titles, authentic-looking (fake or fictional) references, and fabricated testimonials from patients and clinicians, and targeted diverse demographic groups. Although each LLM evaluated had mechanisms to report observed outputs of concern, the developers did not respond when observations of vulnerabilities were reported. CONCLUSIONS: This study found that although effective safeguards are feasible to prevent LLMs from being misused to generate health disinformation, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.


Subject(s)
Camelids, New World , Skin Neoplasms , Humans , Animals , Disinformation , Artificial Intelligence , Cross-Sectional Studies , Sunscreening Agents , Language
9.
PLoS One ; 19(3): e0299031, 2024.
Article in English | MEDLINE | ID: mdl-38478479

ABSTRACT

Public comments are an important channel of civic opinion when governments establish rules. However, recent AI systems can easily generate large quantities of disinformation, including fake public comments. We attempted to distinguish human-written public comments from ChatGPT-generated public comments (including comments in which ChatGPT emulated human writing) using Japanese stylometric analysis. Study 1 applied multidimensional scaling (MDS) to compare 500 texts across five classes: human public comments; public comments generated by GPT-3.5 and GPT-4 from only the titles of human public comments (i.e., zero-shot learning; GPTzero); and public comments generated by GPT-3.5 and GPT-4 after being shown sentences from human public comments and instructed to emulate them (i.e., one-shot learning; GPTone). The MDS results showed that the Japanese stylometric features of human public comments differed completely from those of the GPTzero-generated texts. Moreover, GPTone-generated public comments were closer to human comments than those generated by GPTzero. Study 2 evaluated the performance of a random forest (RF) classifier in distinguishing the three classes (human, GPTzero, and GPTone texts). The RF classifier achieved its best precision for human public comments (approximately 90%), and its best precision for GPT-generated fake public comments (GPTzero and GPTone) reached 99.5% when focusing on integrated writing-style features: phrase patterns, part-of-speech (POS) bigrams and trigrams, and function words. We therefore conclude that, at present, GPT-generated fake public comments can be discriminated from those written by humans.
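Study 2's classifier can be sketched in a hedged form as follows. Real Japanese stylometry would derive phrase patterns, POS bigrams/trigrams, and function-word counts with a morphological analyser such as MeCab; character n-grams stand in as a rough proxy here, and the labelled texts are toy placeholders.

```python
# Minimal sketch (not the authors' pipeline): a random forest over simple
# stylometric features to separate human, GPTzero, and GPTone public comments.
# Character n-grams are a crude stand-in for POS n-grams and function words.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

texts = [
    "この規則案に賛成します。市民の声を反映してください。",
    "本件には反対です。手続きの透明性が不足しています。",
    "当該規則案について、以下のとおり意見を申し述べます。",
    "規則の施行により市民生活への影響が懸念されます。",
]
labels = ["human", "human", "GPTzero", "GPTone"]  # toy labels

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(texts, labels)
print(clf.predict(["この案には賛成できません。"]))
```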


Subject(s)
Disinformation , Learning , Humans , Japan , Government , Multidimensional Scaling Analysis
10.
Brasília; Conselho Nacional de Saúde; 14 mar. 2024. 3 p.
Non-conventional in Portuguese | CNS - Brazilian National Health Council | ID: biblio-1538232

ABSTRACT

Recommends that the National Congress approve PL nº 2.630/2020 (the "PL das Fake News"), incorporating the report presented by its rapporteur, Deputy Orlando Silva, with a view to strengthening democracy and protecting the physical and mental health of the Brazilian population by combating hate speech and disinformation. Recommends that the Municipal, State, and Federal District Health Councils promote activities on the risks that disinformation poses to Brazilian democracy. Plenary of the Conselho Nacional de Saúde (Brazilian National Health Council), at its 352nd Ordinary Meeting, held on 13 and 14 March 2024.


Subject(s)
Journalism/legislation & jurisprudence , Democracy , Government Employees , Disinformation
14.
Article in German | MEDLINE | ID: mdl-38332143

ABSTRACT

Misinformation and disinformation in social media have become a challenge for effective public health measures. Here, we examine factors that influence believing and sharing false information, both misinformation and disinformation, at the individual, social, and contextual levels and discuss possibilities for intervention. At the individual level, knowledge deficits, lack of skills, and emotional motivation have been associated with believing false information. Lower health literacy, a conspiracy mindset, and certain beliefs increase susceptibility to false information. At the social level, the credibility of information sources and social norms influence the sharing of false information. At the contextual level, emotions and the repetition of messages affect belief in, and sharing of, false information. Interventions at the individual level involve measures to improve knowledge and skills. At the social level, addressing social processes and social norms can reduce the sharing of false information. At the contextual level, regulatory approaches involving social networks are considered an important point of intervention. Social inequalities play an important role in exposure to and processing of misinformation. It remains unclear to what degree susceptibility to believing and sharing misinformation is an individual characteristic and/or context dependent. Complex interventions that take multiple influencing factors into account are required.


Subject(s)
Health Communication , Social Media , Humans , Disinformation , Digital Health , Germany , Communication
16.
Neural Netw ; 172: 106115, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38219679

ABSTRACT

With the proliferation of social media, fake news has become a critical issue that poses a significant threat to society. The dissemination of fake information can cause social harm and undermine the credibility of information. To address this issue, deep learning has emerged as a promising approach, especially with the development of Natural Language Processing (NLP). This study introduces a novel approach called the Graph Global Attention Network with Memory (GANM) for detecting fake news. The approach leverages NLP techniques to encode nodes with news context and user content. It employs three graph convolutional networks to extract informative features from the news propagation network and aggregates endogenous and exogenous user information, addressing the challenge of identifying fake news in the context of social media. The GANM combines two strategies. First, a novel global attention mechanism with memory is employed to learn the structural homogeneity of news propagation networks; that is, attention is computed over a single graph while retaining a history of all graphs. Second, a partial key information learning and aggregation module emphasizes the acquisition of partial key information in the graph and merges node-level embeddings with graph-level embeddings into fine-grained joint information. The proposed method provides a new direction for fake news detection research by combining global and partial information and achieves promising performance on real-world datasets.
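The "global attention" idea can be illustrated, in a heavily simplified form, as attention-weighted pooling of node embeddings from a propagation graph into one graph-level vector. This sketch shows only the generic building block, not the GANM architecture; all tensors are random placeholders.

```python
# Minimal sketch (illustration only, not the GANM model): attention-weighted
# pooling of node embeddings into a graph-level embedding for one
# news-propagation graph.
import numpy as np

rng = np.random.default_rng(0)
node_emb = rng.normal(size=(5, 8))   # 5 nodes, 8-dimensional embeddings
attn_vec = rng.normal(size=8)        # learned in practice; random here

scores = node_emb @ attn_vec                      # one relevance score per node
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over the nodes
graph_emb = weights @ node_emb                    # weighted sum -> graph vector
print(graph_emb.shape)                            # (8,)
```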


Subject(s)
Deep Learning , Social Media , Humans , Disinformation , Gene Regulatory Networks , Natural Language Processing
18.
Health Commun ; 39(3): 616-628, 2024 Mar.
Article in English | MEDLINE | ID: mdl-36794382

ABSTRACT

Health-related misinformation is a major threat to public health and particularly worrisome for populations experiencing health disparities. This study sets out to examine the prevalence, socio-psychological predictors, and consequences of beliefs in COVID-19 vaccine misinformation among unvaccinated Black Americans. We conducted an online national survey with Black Americans who had not been vaccinated against COVID-19 (N = 800) between February and March 2021. Results showed that beliefs in COVID-19 vaccine misinformation were prevalent among unvaccinated Black Americans with 13-19% of participants agreeing or strongly agreeing with various false claims about COVID-19 vaccines and 35-55% unsure about the veracity of these claims. Conservative ideology, conspiracy thinking mind-set, religiosity, and racial consciousness in health care settings predicted greater beliefs in COVID-19 vaccine misinformation, which were associated with lower vaccine confidence and acceptance. Theoretical and practical implications of the findings are discussed.


Subject(s)
COVID-19 Vaccines , COVID-19 , Health Knowledge, Attitudes, Practice , Humans , Black or African American , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19 Vaccines/therapeutic use , Prevalence , Vaccination , Disinformation
19.
Curr Opin Psychol ; 55: 101732, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38070207

ABSTRACT

We synthesize evidence from 176 experimental estimates of 11 interventions intended to combat misinformation in the Global North and Global South, which we classify as informational, educational, sociopsychological, or institutional. Among these, we find the most consistent positive evidence for two informational interventions in both Global North and Global South contexts: inoculation/prebunking and debunking. In a complementary survey of 138 misinformation scholars and practitioners, we find that experts tend to be most optimistic about interventions that have been least widely studied or that have been shown to be mostly ineffective. We provide a searchable database of misinformation randomized controlled trials and suggest avenues for future research to close the gap between expert opinion and academic research.


Subject(s)
Communication , Humans , Disinformation
20.
Curr Opin Psychol ; 55: 101739, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38091666

ABSTRACT

Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people's engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages-source and information selection-as relatively neglected processes that should be addressed to further improve people's ability to contend with misinformation.


Subject(s)
Communication , Internet , Humans , Disinformation , Social Media